23 research outputs found

    The students’ acceptance of learning management systems in Saudi Arabian Universities

    For distance learners, continuous formal education is very important for improving knowledge and the learning experience in order to meet career challenges in the modern world. This work studies the success factors that affect the use of learning management systems (LMS) and evaluates the applicability of the proposed model in the field of distance learning (DL), particularly in higher education. A survey was carried out among higher education learners enrolled in DL instruction, using a questionnaire adapted from the literature to examine three dimensions: system design, system usage, and system outcome. Using the survey data obtained from DL students (N = 149), path analysis revealed that system design has a significant effect on user satisfaction and on the intention to use the LMS, which in turn affects system use. Consequently, user satisfaction and system use have a strong impact on the net benefit.
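    As a rough, hypothetical illustration of this kind of path model (synthetic data and invented construct names such as design, satisfaction, intention, use, and net_benefit; not the study's actual survey data or estimates), path coefficients can be approximated by a chain of ordinary least-squares regressions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 149  # same sample size as the study; the data themselves are synthetic

# Hypothetical construct scores (e.g., averaged questionnaire items).
design = rng.normal(size=n)
satisfaction = 0.6 * design + rng.normal(scale=0.5, size=n)
intention = 0.5 * design + rng.normal(scale=0.5, size=n)
use = 0.4 * satisfaction + 0.4 * intention + rng.normal(scale=0.5, size=n)
net_benefit = 0.5 * satisfaction + 0.4 * use + rng.normal(scale=0.5, size=n)

def path_coefs(y, *predictors):
    """Estimate the path coefficients for y with an ordinary least-squares regression."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.round(beta[1:], 3)  # drop the intercept

print("design -> satisfaction:          ", path_coefs(satisfaction, design))
print("design -> intention:             ", path_coefs(intention, design))
print("satisfaction, intention -> use:  ", path_coefs(use, satisfaction, intention))
print("satisfaction, use -> net benefit:", path_coefs(net_benefit, satisfaction, use))
```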

    Improved Multi-Verse Optimizer Feature Selection Technique With Application To Phishing, Spam, and Denial Of Service Attacks

    Intelligent classification systems have proved their merits in different fields, including cybersecurity. However, most cybercrime issues are dynamic rather than static classification problems, where the set of discriminative features keeps changing over time. This requires revising the cybercrime classification system and picking a group of features that preserve or enhance its performance. Moreover, system compactness is an important factor when judging the capability of any classification system, and cybercrime classification systems are no exception. The current research proposes an improved feature selection algorithm inspired by the well-known multi-verse optimizer (MVO) algorithm. The algorithm is then applied to three different cybercrime classification problems, namely phishing websites, spam, and denial of service attacks. MVO is a population-based approach that simulates the multi-verse theory from physics, using the white-hole and black-hole principles for exploration and the wormhole principle for exploitation. A roulette selection scheme models the white-hole and black-hole principles in the exploration phase; because it is biased toward good solutions, the population moves toward the best solution and is likely to lose diversity, while other solutions that may contain important information never get a chance to be improved. This research therefore improves the exploration of MVO by introducing adaptive neighborhood search operations when updating the MVO solutions. In the classification phase, a classifier is used to evaluate the results and validate the selected features. Empirical outcomes confirm that the improved MVO (IMVO) algorithm enhances the search capability of MVO and outperforms the other algorithms included in the comparison.
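    The sketch below is a minimal, illustrative wrapper-style feature selector in the spirit of a binary MVO with a simple bit-flip neighborhood step; the dataset (scikit-learn's breast cancer data), KNN classifier, compactness penalty, and all parameter values are assumptions for demonstration, not the paper's IMVO implementation:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
X, y = load_breast_cancer(return_X_y=True)
n_features, pop_size, iters = X.shape[1], 10, 20

def fitness(mask):
    """Wrapper fitness: cross-validated accuracy minus a small compactness penalty."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X[:, mask], y, cv=3).mean()
    return acc - 0.01 * mask.sum() / n_features

universes = rng.random((pop_size, n_features)) < 0.5   # binary "universes"
scores = np.array([fitness(u) for u in universes])

for _ in range(iters):
    order = np.argsort(scores)[::-1]                   # best universes first
    probs = scores[order] / scores[order].sum()
    for i in range(pop_size):
        # white hole / black hole: roulette-copy features from fitter universes
        donor = universes[order[rng.choice(pop_size, p=probs)]]
        swap = rng.random(n_features) < 0.3
        trial = np.where(swap, donor, universes[i])
        # neighborhood step: flip a couple of random bits to restore diversity
        flip = rng.choice(n_features, size=2, replace=False)
        trial[flip] = ~trial[flip]
        s = fitness(trial)
        if s > scores[i]:
            universes[i], scores[i] = trial, s

best = universes[np.argmax(scores)]
print("selected features:", np.flatnonzero(best), "fitness:", round(scores.max(), 3))
```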

    Neighborhood search methods with Moth Optimization algorithm as a wrapper method for feature selection problems

    Feature selection methods select a subset of features from the data so that only the useful information is mined from the samples, which improves both the accuracy and the computational efficiency of the learning model. The Moth-Flame Optimization (MFO) algorithm is a population-based approach that simulates the behavior of real moths in nature. One drawback of the MFO algorithm, as investigated in this paper, is that the solutions move toward the best solution and can easily get stuck in local optima. We therefore propose an MFO algorithm combined with a neighborhood search method for feature selection problems, in order to keep the MFO algorithm from getting trapped in local optima and to avoid premature convergence. The neighborhood search method is applied after a predefined number of unimproved iterations (the number of failed attempts to improve the current solution). As a result, the proposed algorithm shows good performance when compared with the original MFO algorithm and with state-of-the-art approaches.
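    A minimal sketch of the stagnation-triggered idea (not the authors' code): a basic MFO loop on a toy continuous objective, where a neighborhood perturbation is applied once a counter of unimproved iterations reaches a threshold; all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, pop, iters, stall_limit = 10, 20, 200, 15
sphere = lambda x: np.sum(x**2, axis=-1)       # toy objective to minimise

moths = rng.uniform(-5, 5, (pop, dim))
best_val, stall = np.inf, 0

for it in range(iters):
    fit = sphere(moths)
    order = np.argsort(fit)
    flames = moths[order]                      # flames = current best positions
    n_flames = max(1, round(pop - it * (pop - 1) / iters))

    # logarithmic spiral move of each moth around its assigned flame
    for i in range(pop):
        f = flames[min(i, n_flames - 1)]
        t = rng.uniform(-1, 1, dim)
        moths[i] = np.abs(f - moths[i]) * np.exp(t) * np.cos(2 * np.pi * t) + f

    cur = fit[order[0]]
    if cur < best_val - 1e-12:
        best_val, stall = cur, 0
    else:
        stall += 1

    # neighborhood search: jitter the worst half once the search stagnates
    if stall >= stall_limit:
        worst = order[pop // 2:]
        moths[worst] += rng.normal(scale=0.5, size=(len(worst), dim))
        stall = 0

print("best value found:", best_val)
```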

    Bivariate modified Hotelling's T2 charts using bootstrap data

    The conventional Hotelling's T2 charts are evidently inefficient when the data are contaminated with outliers; therefore, this study proposes a novel robust alternative to the Hotelling's T2 chart. Based on a robust scale estimator, this approach uses the Hodges-Lehmann vector and the associated covariance matrix in place of the arithmetic mean vector and the conventional covariance matrix, respectively. The performance of the proposed chart was examined using simulated bivariate bootstrap datasets under two conditions, namely independent variables and dependent variables. The robustness of the modified chart was then assessed by computing the probability of outlier detection and the probability of false alarms. The results of these computations show that the proposed charts outperform the conventional ones in all the cases tested.
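    As a simplified, hypothetical sketch of the ingredients involved (not the paper's exact procedure): a bivariate T2 statistic centered at the Hodges-Lehmann estimator, with a bootstrap-based control limit. Since the specific robust covariance estimator is not detailed above, the ordinary sample covariance is used as a stand-in, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

def hodges_lehmann(x):
    """Median of the pairwise Walsh averages (x_i + x_j) / 2, i <= j."""
    i, j = np.triu_indices(len(x))
    return np.median((x[i] + x[j]) / 2)

def t2_stats(data, location, cov_inv):
    """Quadratic-form T2 statistic for each row of `data`."""
    diff = data - location
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

# Phase I: clean bivariate reference sample
ref = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=100)
loc = np.array([hodges_lehmann(ref[:, 0]), hodges_lehmann(ref[:, 1])])
cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))

# Bootstrap the reference sample to estimate a 99% control limit
boot_limits = []
for _ in range(500):
    sample = ref[rng.integers(0, len(ref), len(ref))]
    boot_limits.append(np.quantile(t2_stats(sample, loc, cov_inv), 0.99))
ucl = np.median(boot_limits)

# Phase II: monitor new observations, a few of which are outliers
new = np.vstack([rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], 45),
                 rng.multivariate_normal([4, 4], [[1, 0.5], [0.5, 1]], 5)])
signals = t2_stats(new, loc, cov_inv) > ucl
print(f"control limit = {ucl:.2f}, flagged {signals.sum()} of {len(new)} points")
```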

    A novel population-based local search for nurse rostering problem

    Population-based approaches are generally better than single-solution (local search) approaches at exploring the search space. However, the drawback of population-based approaches lies in exploiting the search space. Several hybrid approaches have proven their efficiency across different domains of optimization problems by integrating the strengths of population-based and local search approaches, but hybrid methods have the drawback of increased parameter tuning. Recently, a population-based local search (PB-LS) with fewer parameters than existing approaches was proposed for the university course-timetabling problem and proved effective. The approach employs two operators to intensify and diversify the search: the first operator is applied to a single solution, while the second is applied to all solutions. This paper investigates the performance of population-based local search on the nurse rostering problem. The INRC2010 benchmark, comprising 69 instances, is used to test the performance of PB-LS, and a comparison is made with other existing approaches in the literature. The results show good performance of the proposed approach compared to the other approaches, with PB-LS providing the best results in 55 of the 69 instances used in the experiments.
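    The following is a high-level, toy sketch of the two-operator structure described above (not the authors' PB-LS, and far simpler than the INRC2010 constraints): operator 1 perturbs a single solution for intensification, operator 2 perturbs every solution for diversification, over roster matrices whose only soft constraint here is even shift coverage:

```python
import numpy as np

rng = np.random.default_rng(3)
n_nurses, n_days, n_shifts = 6, 7, 3
pop_size, iters = 6, 300

def cost(roster):
    """Toy penalty: deviation from covering every day/shift with exactly 2 nurses."""
    counts = np.stack([(roster == s).sum(axis=0) for s in range(n_shifts)])
    return int(np.abs(counts - 2).sum())

population = [rng.integers(0, n_shifts, (n_nurses, n_days)) for _ in range(pop_size)]
costs = [cost(r) for r in population]

for _ in range(iters):
    # operator 1 (single solution): reassign one nurse/day cell of the current best
    best = int(np.argmin(costs))
    trial = population[best].copy()
    trial[rng.integers(n_nurses), rng.integers(n_days)] = rng.integers(n_shifts)
    if cost(trial) <= costs[best]:
        population[best], costs[best] = trial, cost(trial)

    # operator 2 (all solutions): randomly reassign one whole day in every roster
    for i in range(pop_size):
        trial = population[i].copy()
        day = rng.integers(n_days)
        trial[:, day] = rng.integers(0, n_shifts, n_nurses)
        if cost(trial) <= costs[i]:
            population[i], costs[i] = trial, cost(trial)

print("best penalty found:", min(costs))
```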

    Use of production functions in assessing the profitability of shares of insurance companies

    In this study, production functions (Cobb-Douglas, Zellner-Revankar, and the transcendental production function) are used to assess the profitability of insurance companies, by reformulating these nonlinear functions through the introduction of a set of variables that increase the explanatory capacity of the model. The production function best suited to the nature of the variable representing the profitability of insurance companies was then chosen, so that it could be used to assess the efficiency of their profitability against the use of different factors of production, and hence for forecasting. It was found that the proposed model based on the Zellner-Revankar production function best represents the profitability of the Tawuniya and Bupa insurance companies, while the proposed Cobb-Douglas model suits the results of the Enaya and Sanad Cooperative insurance companies. The explanatory capacity of the production functions also increased when the proposed variables (net subscribed premiums and net claims incurred) were added.
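    As an illustrative sketch (synthetic data, not the companies' figures): a Cobb-Douglas function Q = A·K^a·L^b can be estimated by ordinary least squares after taking logarithms, ln Q = ln A + a ln K + b ln L; the transcendental form simply adds K and L themselves as extra regressors. The variable names below are hypothetical stand-ins for the study's inputs:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60
K = rng.uniform(10, 100, n)   # hypothetical input, e.g. net subscribed premiums
L = rng.uniform(5, 50, n)     # hypothetical input, e.g. net claims incurred
# Synthetic "profitability" generated from a known Cobb-Douglas form plus noise
Q = 2.0 * K**0.6 * L**0.3 * np.exp(rng.normal(scale=0.05, size=n))

# Log-linearise and fit ln Q = ln A + a ln K + b ln L by least squares
X = np.column_stack([np.ones(n), np.log(K), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)
lnA, a, b = coef
print(f"A = {np.exp(lnA):.3f}, a = {a:.3f}, b = {b:.3f}")
```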

    Fog computing scheduling algorithm for smart city

    With the growing number of smart devices across the globe, the number of Internet users keeps increasing. The main aim of the fog computing (FC) paradigm is to connect a huge number of smart objects (billions of objects), which can make a bright future for smart cities. Due to the large-scale deployment of smart devices, these devices are expected to generate huge amounts of data and forward them through the Internet. FC is an edge computing framework that mitigates this issue by applying knowledge discovery through data analysis at the edge. Thus, FC approaches can work together with the Internet of Things (IoT) to build a sustainable infrastructure for smart cities. In this paper, we propose a scheduling algorithm, namely the weighted round-robin (WRR) scheduling algorithm, to execute tasks from one fog node (FN) to another and on to the cloud. First, a fog simulator is used with the emerging FC concept to design an IoT infrastructure for smart cities. Then, the spanning-tree protocol (STP) is used for data collection and routing. Further, 5G networks are proposed to establish fast transmission and communication between users. Finally, the performance of the proposed system is evaluated in terms of response time, latency, and the amount of data used.
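    As a minimal illustration of weighted round-robin dispatch (the node names and weights are invented, not the paper's simulation setup): each node receives a number of tasks per scheduling cycle proportional to its weight:

```python
# Hypothetical fog/cloud nodes and weights; higher weight = more tasks per cycle.
fog_nodes = {"fog-1": 3, "fog-2": 2, "cloud": 1}

def wrr_schedule(nodes):
    """Yield node names so that each node appears `weight` times per scheduling cycle."""
    while True:
        for name, weight in nodes.items():
            for _ in range(weight):
                yield name

scheduler = wrr_schedule(fog_nodes)
for i in range(12):                      # dispatch 12 example tasks
    print(f"task-{i} -> {next(scheduler)}")
```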

    Emergent situations for smart cities: A survey

    A smart city is a community that uses information and communication technology to improve sustainability, livability, and feasibility. As in any community, unexpected emergencies always arise and must be handled to preserve regular order, so a smart system is needed that can respond effectively to these emergent situations. The contribution of this survey is twofold. First, it provides a comprehensive, exhaustive, and categorized overview of the existing surveys on smart cities, with the categorization based on several criteria such as structures, benefits, advantages, applications, challenges, issues, and future directions. Second, it analyzes several studies with respect to emergent situations and their management in smart cities, based on factors such as the challenges and issues discussed, the solutions proposed, and opportunities for future research. The challenges include security, privacy, reliability, performance, scalability, heterogeneity, scheduling, resource management, and latency. Few studies have investigated the emergent situations of smart cities, and despite the importance of latency for smart city applications, it is rarely discussed.

    Applying the big bang-big crunch metaheuristic to large-sized operational problems

    In this study, we investigate the capability of the big bang-big crunch (BBBC) metaheuristic for managing operational problems, including combinatorial optimization problems. The BBBC is inspired by the evolution theory of the universe in physics and astronomy. Its two main phases are the big bang and the big crunch: the big bang phase creates a population of random initial solutions, while the big crunch phase shrinks these solutions into one elite solution represented by a mass center. This study examines the effectiveness of BBBC on assignment and scheduling problems, where it is enhanced by incorporating an elite pool of diverse and high-quality solutions, a simple descent heuristic as a local search method, implicit recombination, Euclidean distance, a dynamic population size, and elitism strategies. Together these strategies provide a balanced search over a diverse, good-quality population. The investigation compares the proposed BBBC with similar metaheuristics on three classes of combinatorial optimization problems, namely quadratic assignment, bin packing, and job shop scheduling, where the incorporated strategies have a considerable impact on BBBC's performance. Experiments show that the BBBC maintains a good balance between diversity and quality, produces high-quality solutions, and outperforms comparable metaheuristics (e.g. swarm intelligence and evolutionary algorithms) reported in the literature.
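    For orientation, here is a compact sketch of the basic big bang-big crunch loop on a toy continuous objective (not the enhanced BBBC evaluated above, which adds the elite pool, descent local search, dynamic population size, and the other strategies); all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(9)
dim, pop, iters = 5, 30, 100
lo, hi = -5.0, 5.0
objective = lambda x: np.sum(x**2, axis=-1)        # minimise the sphere function

points = rng.uniform(lo, hi, (pop, dim))           # initial big bang
best_x, best_f = None, np.inf

for k in range(1, iters + 1):
    f = objective(points)
    if f.min() < best_f:
        best_f, best_x = f.min(), points[np.argmin(f)].copy()

    # big crunch: fitness-weighted centre of mass (better points weigh more)
    w = 1.0 / (f + 1e-12)
    center = (points * w[:, None]).sum(axis=0) / w.sum()

    # big bang: scatter a new population around the centre, shrinking over time
    spread = (hi - lo) * rng.standard_normal((pop, dim)) / k
    points = np.clip(center + spread, lo, hi)

print("best objective value:", best_f)
```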

    Hybrid feature selection method based on particle swarm optimization and adaptive local search method

    Machine learning has been examined extensively, with data classification as the most popularly researched subject. Prediction accuracy is affected by the data provided to the classification algorithm, yet using a large amount of data may incur costs, especially in data collection and preprocessing. Studies on feature selection have mainly aimed to establish techniques that decrease the number of features (attributes) used in classification while still relying on data that yield accurate predictions. Hence, a particle swarm optimization (PSO) algorithm is proposed in the current article for selecting the ideal set of features. The PSO algorithm has proved superior at exploring the search space in different domains, whereas local search algorithms are good at exploiting search regions. We therefore hybridize the PSO algorithm with an adaptive local search technique that operates according to the current PSO search state and is used to accept candidate solutions. This combination balances the local intensification and the global diversification of the search process. As a result, the suggested algorithm surpasses the original PSO algorithm and other comparable approaches in terms of performance.
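    A minimal sketch of a hybrid of this kind (not the paper's exact algorithm): a sigmoid-transfer binary PSO feature-selection wrapper where a simple local search on the global best flips more bits as stagnation grows, accepting only improving moves; the dataset (scikit-learn's wine data), KNN classifier, penalty term, and parameters are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(11)
X, y = load_wine(return_X_y=True)
d, pop, iters = X.shape[1], 12, 25

def fitness(mask):
    """Wrapper fitness: cross-validated accuracy minus a small size penalty."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=3), X[:, mask], y, cv=3).mean()
    return acc - 0.02 * mask.sum() / d

vel = np.zeros((pop, d))
pos = rng.random((pop, d)) < 0.5                       # binary particle positions
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
g = pbest[np.argmax(pbest_f)].copy()
g_f, stall = pbest_f.max(), 0

for _ in range(iters):
    r1, r2 = rng.random((2, pop, d))
    vel = 0.7 * vel + 1.5 * r1 * (pbest.astype(float) - pos) \
                    + 1.5 * r2 * (g.astype(float) - pos)
    pos = rng.random((pop, d)) < 1 / (1 + np.exp(-vel))  # sigmoid transfer
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    if pbest_f.max() > g_f:
        g, g_f, stall = pbest[np.argmax(pbest_f)].copy(), pbest_f.max(), 0
    else:
        stall += 1

    # adaptive local search: flip more bits of the global best as stagnation grows
    flips = rng.choice(d, size=min(d, 1 + stall), replace=False)
    cand = g.copy()
    cand[flips] = ~cand[flips]
    cf = fitness(cand)
    if cf > g_f:                                       # accept only improving moves
        g, g_f, stall = cand, cf, 0

print("selected features:", np.flatnonzero(g), "fitness:", round(g_f, 3))
```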